Opera exploration server

Warning: this site is under development!
Warning: this site is generated automatically from raw corpora.
The information is therefore not validated.

Automatic Rush Generation with Application to Theatre Performances

Internal identifier: 000263 (Main/Exploration); previous: 000262; next: 000264

Automatic Rush Generation with Application to Theatre Performances

Authors: Vineet Gandhi [France]

Source:

RBID: Hal:tel-01119207

French descriptors

English descriptors

Abstract

Professional quality videos of live staged performances are created by recording them from different appropriate viewpoints. These are then edited together to portray an eloquent story replete with the ability to draw out the intended emotion from the viewers. Creating such competent videos typically requires a team of skilled camera operators to capture the scene from multiple viewpoints. In this thesis, we explore an alternative approach where we automatically compute camera movements in post-production using specially designed computer vision methods. A high resolution static camera replaces the plural camera crew, and their efficient camera movements are then simulated by virtually panning, tilting and zooming within the original recordings. We show that multiple virtual cameras can be simulated by choosing different trajectories of cropping windows inside the original recording. One of the key novelties of this work is an optimization framework for computing the virtual camera trajectories using the information extracted from the original video based on computer vision techniques.

The actors present on stage are considered as the most important elements of the scene. For the task of localizing and naming actors, we introduce generative models for learning view-independent person- and costume-specific detectors from a set of labeled examples. We explain how to learn the models from a small number of labeled keyframes or video tracks, and how to detect novel appearances of the actors in a maximum likelihood framework. We demonstrate that such actor-specific models can accurately localize actors despite changes in viewpoint and occlusions, and significantly improve the detection recall rates over generic detectors.

The thesis then proposes an offline algorithm for tracking objects and actors in long video sequences using these actor-specific models. Detections are first performed to independently select candidate locations of the actor/object in each frame of the video. The candidate detections are then combined into smooth trajectories by minimizing a cost function accounting for false detections and occlusions.

Using the actor tracks, we then describe a method for automatically generating multiple clips suitable for video editing by simulating pan-tilt-zoom camera movements within the frame of a single static camera. Our method requires only minimal user input to define the subject matter of each sub-clip. The composition of each sub-clip is automatically computed in a novel convex optimization framework. Our approach encodes several common cinematographic practices into a single convex cost function minimization problem, resulting in aesthetically pleasing sub-clips which can easily be edited together using off-the-shelf multi-clip video editing software.

The proposed methods have been tested and validated on a challenging corpus of theatre recordings. They open the way to novel applications of computer vision methods for cost-effective video production of live performances including, but not restricted to, theatre, music and opera.
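To make the cropping-window idea concrete, the following minimal Python sketch simulates a virtual pan of a static wide-angle recording: noisy per-frame actor centers are smoothed by minimizing a simple convex cost (data fidelity plus a first-difference smoothness penalty), and the smoothed trajectory drives a fixed-size crop window. This is only an illustrative assumption, not the optimization framework of the thesis; the function names, the smoothness weight and the synthetic actor positions are all hypothetical.

# Minimal sketch: virtual pan-tilt-zoom by cropping-window trajectories.
# Assumptions (not from the thesis): per-frame actor centers are already
# available, and the "virtual camera" is a fixed-size crop window.
import numpy as np

def smooth_trajectory(centers, smoothness=50.0):
    """Fit a smooth 1-D window-center trajectory to noisy per-frame targets
    by minimizing sum_t (x_t - c_t)^2 + smoothness * sum_t (x_{t+1} - x_t)^2,
    a convex least-squares problem solved as one linear system."""
    c = np.asarray(centers, dtype=float)
    n = len(c)
    # First-order difference operator D of shape (n-1, n).
    D = np.zeros((n - 1, n))
    for t in range(n - 1):
        D[t, t], D[t, t + 1] = -1.0, 1.0
    A = np.eye(n) + smoothness * D.T @ D
    return np.linalg.solve(A, c)

def crop_windows(centers_x, frame_width, window_width):
    """Turn a smoothed horizontal trajectory into per-frame crop windows
    (left, right), clamped to the frame: a virtual pan of a static camera."""
    half = window_width / 2.0
    lefts = np.clip(centers_x - half, 0, frame_width - window_width)
    return [(int(l), int(l + window_width)) for l in lefts]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical noisy horizontal actor positions in a 3840-pixel-wide frame.
    true_path = np.linspace(800, 2600, 300)
    noisy = true_path + rng.normal(0, 60, size=300)
    smooth = smooth_trajectory(noisy, smoothness=100.0)
    windows = crop_windows(smooth, frame_width=3840, window_width=1280)
    print(windows[:3], windows[-3:])

A virtual zoom could be sketched the same way by also optimizing the window width; the thesis itself formulates sub-clip composition as a richer convex problem that encodes common cinematographic practices.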

Url:


Affiliations:


Links to previous steps (curation, corpus...)


The document in XML format

<record>
<TEI>
<teiHeader>
<fileDesc>
<titleStmt>
<title xml:lang="en">Automatic Rush Generation with Application to Theatre Performances</title>
<title xml:lang="fr">Généation Automatique de Prises de Vues Cinématographiques avec Applications aux Captations de Théâtre</title>
<author>
<name sortKey="Gandhi, Vineet" sort="Gandhi, Vineet" uniqKey="Gandhi V" first="Vineet" last="Gandhi">Vineet Gandhi</name>
<affiliation wicri:level="1">
<hal:affiliation type="researchteam" xml:id="struct-174814" status="VALID">
<orgName>Intuitive Modeling and Animation for Interactive Graphics & Narrative Environments</orgName>
<orgName type="acronym">IMAGINE</orgName>
<desc>
<address>
<addrLine>655 avenue de l'Europe 38 334 Saint Ismier cedex</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/equipes/imagine</ref>
</desc>
<listRelation>
<relation active="#struct-24474" type="direct"></relation>
<relation active="#struct-3886" type="indirect"></relation>
<relation active="#struct-51016" type="indirect"></relation>
<relation active="#struct-300339" type="indirect"></relation>
<relation name="UMR5224" active="#struct-441569" type="indirect"></relation>
<relation active="#struct-300275" type="direct"></relation>
<relation active="#struct-2497" type="direct"></relation>
<relation active="#struct-300009" type="indirect"></relation>
</listRelation>
<tutelles>
<tutelle active="#struct-24474" type="direct">
<org type="laboratory" xml:id="struct-24474" status="VALID">
<orgName>Laboratoire Jean Kuntzmann</orgName>
<orgName type="acronym">LJK</orgName>
<desc>
<address>
<addrLine>Tour IRMA 51 rue des Mathématiques - 53 38041 GRENOBLE CEDEX 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://ljk.imag.fr</ref>
</desc>
<listRelation>
<relation active="#struct-3886" type="direct"></relation>
<relation active="#struct-51016" type="direct"></relation>
<relation active="#struct-300339" type="direct"></relation>
<relation name="UMR5224" active="#struct-441569" type="direct"></relation>
</listRelation>
</org>
</tutelle>
<tutelle active="#struct-3886" type="indirect">
<org type="institution" xml:id="struct-3886" status="OLD">
<orgName>Université Pierre Mendès France</orgName>
<orgName type="acronym">Grenoble 2 UPMF</orgName>
<date type="end">2015-12-31</date>
<desc>
<address>
<addrLine>BP 47 - 38040 Grenoble Cedex 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.upmf-grenoble.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-51016" type="indirect">
<org type="institution" xml:id="struct-51016" status="OLD">
<orgName>Université Joseph Fourier</orgName>
<orgName type="acronym">UJF</orgName>
<date type="end">2015-12-31</date>
<desc>
<address>
<addrLine>BP 53 - 38041 Grenoble Cedex 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.ujf-grenoble.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-300339" type="indirect">
<org type="institution" xml:id="struct-300339" status="VALID">
<orgName>Institut Polytechnique de Grenoble - Grenoble Institute of Technology</orgName>
<desc>
<address>
<country key="FR"></country>
</address>
</desc>
</org>
</tutelle>
<tutelle name="UMR5224" active="#struct-441569" type="indirect">
<org type="institution" xml:id="struct-441569" status="VALID">
<idno type="IdRef">02636817X</idno>
<idno type="ISNI">0000000122597504</idno>
<orgName>Centre National de la Recherche Scientifique</orgName>
<orgName type="acronym">CNRS</orgName>
<date type="start">1939-10-19</date>
<desc>
<address>
<country key="FR"></country>
</address>
<ref type="url">http://www.cnrs.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-300275" type="direct">
<org type="institution" xml:id="struct-300275" status="VALID">
<orgName>Institut National Polytechnique de Grenoble (INPG)</orgName>
<desc>
<address>
<country key="FR"></country>
</address>
</desc>
</org>
</tutelle>
<tutelle active="#struct-2497" type="direct">
<org type="laboratory" xml:id="struct-2497" status="VALID">
<orgName>Inria Grenoble - Rhône-Alpes</orgName>
<desc>
<address>
<addrLine>Inovallée 655 avenue de l'Europe 38330 Montbonnot</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/centre/grenoble</ref>
</desc>
<listRelation>
<relation active="#struct-300009" type="direct"></relation>
</listRelation>
</org>
</tutelle>
<tutelle active="#struct-300009" type="indirect">
<org type="institution" xml:id="struct-300009" status="VALID">
<orgName>Institut National de Recherche en Informatique et en Automatique</orgName>
<orgName type="acronym">Inria</orgName>
<desc>
<address>
<addrLine>Domaine de Voluceau Rocquencourt - BP 105 78153 Le Chesnay Cedex</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/en/</ref>
</desc>
</org>
</tutelle>
</tutelles>
</hal:affiliation>
<country>France</country>
<placeName>
<settlement type="city">Grenoble</settlement>
<region type="region" nuts="2">Rhône-Alpes</region>
</placeName>
<orgName type="university">Université Pierre-Mendès-France</orgName>
<orgName type="institution" wicri:auto="newGroup">Université de Grenoble</orgName>
<placeName>
<settlement type="city">Grenoble</settlement>
<region type="region" nuts="2">Rhône-Alpes</region>
</placeName>
<orgName type="university">Université Joseph Fourier</orgName>
<orgName type="institution" wicri:auto="newGroup">Université de Grenoble</orgName>
</affiliation>
</author>
</titleStmt>
<publicationStmt>
<idno type="wicri:source">HAL</idno>
<idno type="RBID">Hal:tel-01119207</idno>
<idno type="halId">tel-01119207</idno>
<idno type="halUri">https://tel.archives-ouvertes.fr/tel-01119207</idno>
<idno type="url">https://tel.archives-ouvertes.fr/tel-01119207</idno>
<date when="2014-12-18">2014-12-18</date>
<idno type="wicri:Area/Hal/Corpus">000032</idno>
<idno type="wicri:Area/Hal/Curation">000032</idno>
<idno type="wicri:Area/Hal/Checkpoint">000052</idno>
<idno type="wicri:Area/Main/Merge">000263</idno>
<idno type="wicri:Area/Main/Curation">000263</idno>
<idno type="wicri:Area/Main/Exploration">000263</idno>
</publicationStmt>
<sourceDesc>
<biblStruct>
<analytic>
<title xml:lang="en">Automatic Rush Generation with Application to Theatre Performances</title>
<title xml:lang="fr">Généation Automatique de Prises de Vues Cinématographiques avec Applications aux Captations de Théâtre</title>
<author>
<name sortKey="Gandhi, Vineet" sort="Gandhi, Vineet" uniqKey="Gandhi V" first="Vineet" last="Gandhi">Vineet Gandhi</name>
<affiliation wicri:level="1">
<hal:affiliation type="researchteam" xml:id="struct-174814" status="VALID">
<orgName>Intuitive Modeling and Animation for Interactive Graphics & Narrative Environments</orgName>
<orgName type="acronym">IMAGINE</orgName>
<desc>
<address>
<addrLine>655 avenue de l'Europe 38 334 Saint Ismier cedex</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/equipes/imagine</ref>
</desc>
<listRelation>
<relation active="#struct-24474" type="direct"></relation>
<relation active="#struct-3886" type="indirect"></relation>
<relation active="#struct-51016" type="indirect"></relation>
<relation active="#struct-300339" type="indirect"></relation>
<relation name="UMR5224" active="#struct-441569" type="indirect"></relation>
<relation active="#struct-300275" type="direct"></relation>
<relation active="#struct-2497" type="direct"></relation>
<relation active="#struct-300009" type="indirect"></relation>
</listRelation>
<tutelles>
<tutelle active="#struct-24474" type="direct">
<org type="laboratory" xml:id="struct-24474" status="VALID">
<orgName>Laboratoire Jean Kuntzmann</orgName>
<orgName type="acronym">LJK</orgName>
<desc>
<address>
<addrLine>Tour IRMA 51 rue des Mathématiques - 53 38041 GRENOBLE CEDEX 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://ljk.imag.fr</ref>
</desc>
<listRelation>
<relation active="#struct-3886" type="direct"></relation>
<relation active="#struct-51016" type="direct"></relation>
<relation active="#struct-300339" type="direct"></relation>
<relation name="UMR5224" active="#struct-441569" type="direct"></relation>
</listRelation>
</org>
</tutelle>
<tutelle active="#struct-3886" type="indirect">
<org type="institution" xml:id="struct-3886" status="OLD">
<orgName>Université Pierre Mendès France</orgName>
<orgName type="acronym">Grenoble 2 UPMF</orgName>
<date type="end">2015-12-31</date>
<desc>
<address>
<addrLine>BP 47 - 38040 Grenoble Cedex 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.upmf-grenoble.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-51016" type="indirect">
<org type="institution" xml:id="struct-51016" status="OLD">
<orgName>Université Joseph Fourier</orgName>
<orgName type="acronym">UJF</orgName>
<date type="end">2015-12-31</date>
<desc>
<address>
<addrLine>BP 53 - 38041 Grenoble Cedex 9</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.ujf-grenoble.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-300339" type="indirect">
<org type="institution" xml:id="struct-300339" status="VALID">
<orgName>Institut Polytechnique de Grenoble - Grenoble Institute of Technology</orgName>
<desc>
<address>
<country key="FR"></country>
</address>
</desc>
</org>
</tutelle>
<tutelle name="UMR5224" active="#struct-441569" type="indirect">
<org type="institution" xml:id="struct-441569" status="VALID">
<idno type="IdRef">02636817X</idno>
<idno type="ISNI">0000000122597504</idno>
<orgName>Centre National de la Recherche Scientifique</orgName>
<orgName type="acronym">CNRS</orgName>
<date type="start">1939-10-19</date>
<desc>
<address>
<country key="FR"></country>
</address>
<ref type="url">http://www.cnrs.fr/</ref>
</desc>
</org>
</tutelle>
<tutelle active="#struct-300275" type="direct">
<org type="institution" xml:id="struct-300275" status="VALID">
<orgName>Institut National Polytechnique de Grenoble (INPG)</orgName>
<desc>
<address>
<country key="FR"></country>
</address>
</desc>
</org>
</tutelle>
<tutelle active="#struct-2497" type="direct">
<org type="laboratory" xml:id="struct-2497" status="VALID">
<orgName>Inria Grenoble - Rhône-Alpes</orgName>
<desc>
<address>
<addrLine>Inovallée 655 avenue de l'Europe 38330 Montbonnot</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/centre/grenoble</ref>
</desc>
<listRelation>
<relation active="#struct-300009" type="direct"></relation>
</listRelation>
</org>
</tutelle>
<tutelle active="#struct-300009" type="indirect">
<org type="institution" xml:id="struct-300009" status="VALID">
<orgName>Institut National de Recherche en Informatique et en Automatique</orgName>
<orgName type="acronym">Inria</orgName>
<desc>
<address>
<addrLine>Domaine de Voluceau Rocquencourt - BP 105 78153 Le Chesnay Cedex</addrLine>
<country key="FR"></country>
</address>
<ref type="url">http://www.inria.fr/en/</ref>
</desc>
</org>
</tutelle>
</tutelles>
</hal:affiliation>
<country>France</country>
<placeName>
<settlement type="city">Grenoble</settlement>
<region type="region" nuts="2">Rhône-Alpes</region>
</placeName>
<orgName type="university">Université Pierre-Mendès-France</orgName>
<orgName type="institution" wicri:auto="newGroup">Université de Grenoble</orgName>
<placeName>
<settlement type="city">Grenoble</settlement>
<region type="region" nuts="2">Rhône-Alpes</region>
</placeName>
<orgName type="university">Université Joseph Fourier</orgName>
<orgName type="institution" wicri:auto="newGroup">Université de Grenoble</orgName>
</affiliation>
</author>
</analytic>
</biblStruct>
</sourceDesc>
</fileDesc>
<profileDesc>
<textClass>
<keywords scheme="mix" xml:lang="en">
<term>Actor detection</term>
<term>Actor recognition</term>
<term>And/Or graph</term>
<term>Film Editing</term>
<term>Virtual cinematography</term>
</keywords>
<keywords scheme="mix" xml:lang="fr">
<term>Cinématographie virtuelle</term>
<term>Détection d'acteurs</term>
<term>Montage vidéo</term>
<term>Reconnaissance d'acteurs</term>
</keywords>
</textClass>
</profileDesc>
</teiHeader>
<front>
<div type="abstract" xml:lang="en">Professional quality videos of live staged performances are created by recording them fromdifferent appropriate viewpoints. These are then edited together to portray an eloquent storyreplete with the ability to draw out the intended emotion from the viewers. Creating such competentvideos typically requires a team of skilled camera operators to capture the scene frommultiple viewpoints. In this thesis, we explore an alternative approach where we automaticallycompute camera movements in post-production using specially designed computer visionmethods.A high resolution static camera replaces the plural camera crew and their efficient cameramovements are then simulated by virtually panning - tilting - zooming within the originalrecordings. We show that multiple virtual cameras can be simulated by choosing different trajectoriesof cropping windows inside the original recording. One of the key novelties of thiswork is an optimization framework for computing the virtual camera trajectories using the informationextracted from the original video based on computer vision techniques.The actors present on stage are considered as the most important elements of the scene.For the task of localizing and naming actors, we introduce generative models for learning viewindependent person and costume specific detectors from a set of labeled examples. We explainhow to learn the models from a small number of labeled keyframes or video tracks, and how todetect novel appearances of the actors in a maximum likelihood framework. We demonstratethat such actor specific models can accurately localize actors despite changes in view point andocclusions, and significantly improve the detection recall rates over generic detectors.The thesis then proposes an offline algorithm for tracking objects and actors in long videosequences using these actor specific models. Detections are first performed to independentlyselect candidate locations of the actor/object in each frame of the video. The candidate detectionsare then combined into smooth trajectories by minimizing a cost function accounting forfalse detections and occlusions.Using the actor tracks, we then describe a method for automatically generating multipleclips suitable for video editing by simulating pan-tilt-zoom camera movements within theframe of a single static camera. Our method requires only minimal user input to define thesubject matter of each sub-clip. The composition of each sub-clip is automatically computedin a novel convex optimization framework. Our approach encodes several common cinematographicpractices into a single convex cost function minimization problem, resulting inaesthetically-pleasing sub-clips which can easily be edited together using off-the-shelf multiclipvideo editing software.The proposed methods have been tested and validated on a challenging corpus of theatrerecordings. They open the way to novel applications of computer vision methods for costeffectivevideo production of live performances including, but not restricted to, theatre, musicand opera.</div>
</front>
</TEI>
<affiliations>
<list>
<country>
<li>France</li>
</country>
<region>
<li>Rhône-Alpes</li>
</region>
<settlement>
<li>Grenoble</li>
</settlement>
<orgName>
<li>Université Joseph Fourier</li>
<li>Université Pierre-Mendès-France</li>
<li>Université de Grenoble</li>
</orgName>
</list>
<tree>
<country name="France">
<region name="Rhône-Alpes">
<name sortKey="Gandhi, Vineet" sort="Gandhi, Vineet" uniqKey="Gandhi V" first="Vineet" last="Gandhi">Vineet Gandhi</name>
</region>
</country>
</tree>
</affiliations>
</record>

To manipulate this document under Unix (Dilib)

EXPLOR_STEP=$WICRI_ROOT/Wicri/Musique/explor/OperaV1/Data/Main/Exploration
HfdSelect -h $EXPLOR_STEP/biblio.hfd -nk 000263 | SxmlIndent | more

Or

HfdSelect -h $EXPLOR_AREA/Data/Main/Exploration/biblio.hfd -nk 000263 | SxmlIndent | more

To add a link to this page in the Wicri network

{{Explor lien
   |wiki=    Wicri/Musique
   |area=    OperaV1
   |flux=    Main
   |étape=   Exploration
   |type=    RBID
   |clé=     Hal:tel-01119207
   |texte=   Automatic Rush Generation with Application to Theatre Performances
}}

Wicri

This area was generated with Dilib version V0.6.21.
Data generation: Thu Apr 14 14:59:05 2016. Site generation: Thu Jan 4 23:09:23 2024